Disney Research
AMOR: Adaptive Character Control through Multi-Objective Reinforcement Learning
Lucas N. Alegre, Agon Serifi, Ruben Grandia, David Müller, Espen Knoop, Moritz Bächer
Reinforcement learning (RL) has significantly advanced the control of physics-based and robotic characters that track kinematic reference motion. However, methods typically rely on a weighted sum of conflicting reward functions, requiring extensive tuning to achieve a desired behavior. Due to the computational cost of RL, this iterative process is a tedious, time-intensive task. Furthermore, for robotics applications, the weights need to be chosen such that the policy performs well in the real world, despite inevitable sim-to-real gaps. To address these challenges, we propose a multi-objective reinforcement learning framework that trains a single policy conditioned on a set of weights, spanning the Pareto front of reward trade-offs. Within this framework, weights can be selected and tuned after training, significantly speeding up iteration time. We demonstrate how this improved workflow can be used to perform highly dynamic motions with a robot character. Moreover, we explore how weight-conditioned policies can be leveraged in hierarchical settings, using a high-level policy to dynamically select weights according to the current task. We show that the multi-objective policy encodes a diverse spectrum of behaviors, facilitating efficient adaptation to novel tasks.
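The mechanism the abstract describes, scalarizing a vector of conflicting rewards with a weight vector that is also an input to the policy, can be sketched in a few lines. This is a toy illustration, not the paper's implementation; all names here are hypothetical.

```python
import random

def scalarize(reward_vec, weights):
    """Weighted sum of conflicting per-objective rewards."""
    assert len(reward_vec) == len(weights)
    return sum(r * w for r, w in zip(reward_vec, weights))

def sample_weights(num_objectives):
    """Sample a random point on the probability simplex, so training
    sees many different trade-offs and can span the Pareto front."""
    raw = [random.random() + 1e-8 for _ in range(num_objectives)]
    total = sum(raw)
    return [x / total for x in raw]

class WeightConditionedPolicy:
    """Toy stand-in for a policy conditioned on the weight vector:
    because the weights are part of the input, trade-offs can be
    changed after training without retraining."""
    def act(self, state, weights):
        # A real policy would be a neural network over (state, weights);
        # here we simply modulate the state by the weights.
        return [s * w for s, w in zip(state, weights)]
```

Because the weight vector is a policy input rather than a fixed training hyperparameter, weights can be selected and tuned after training, which is the iteration-time speedup the abstract highlights.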
AI humanoid robot learns to mimic human emotions and behavior
Ready for a robot that not only looks human but also acts and reacts like one, expressing emotions like shyness, excitement or friendliness? Disney Research, the innovation powerhouse behind The Walt Disney Company, has turned this into reality. Its latest creation is an autonomous humanoid robot that can mimic human emotions and behaviors in real time. Think of it as a real-life WALL-E, but with even more personality. This groundbreaking robot uses advanced artificial intelligence to replicate natural gestures and deliberate actions with striking accuracy.
Flexible expressions could lift 3D-generated faces out of the uncanny valley – TechCrunch
Disney Research is working on ways to smooth out this process, among them a machine learning tool that makes it much easier to generate and manipulate 3D faces without dipping into the uncanny valley. Of course, this technology has come a long way from the wooden expressions and limited details of earlier days. High-resolution, convincing 3D faces can be animated quickly and well, but the subtleties of human expression are not just limitless in variety; they're also very easy to get wrong. Think of how someone's entire face changes when they smile -- it's different for everyone, but there are enough similarities that we fancy we can tell when someone is "really" smiling or just faking it. How can you achieve that level of detail in an artificial face?
Disney's Robot Eyeballs Have A Freakishly Human-Like Stare
Disney Research, the R&D wing of everyone's favorite Hollywood House of Mouse, has built a robot with uncanny valley-defying gaze interaction that's so realistic you'll be convinced you're looking at a real person. Audio-Animatronics is Disney's name for animatronic robots, created by Walt Disney Imagineering, that both move and make sounds in synchronized fashion. This latest update means that Disney's robots could potentially lock eyes with visitors and follow them around with their gaze. Depending on the animatronic model this was incorporated into, that could either create an emotional connection with the guest or, potentially, intimidate the bejesus out of them. "Eye gaze is a significant part of the interactions between people, quite a bit of information is conveyed through movements of the eyes," Matthew Pan, a postdoctoral associate at Disney Research, told Digital Trends.
Neural Networks Model Audience Reactions to Movies
Engineers have created a new deep-learning software capable of assessing complex audience reactions to movies using the viewer's facial expressions. Developed by Disney Research in collaboration with Yisong Yue of Caltech and colleagues at Simon Fraser University, the software relies on a new algorithm known as factorized variational autoencoders (FVAEs). Variational autoencoders use deep learning to automatically translate images of complex objects, like faces, into sets of numerical data, also known as a latent representation or encoding. The contribution of Yue and his colleagues was to train the autoencoders to incorporate metadata (pertinent information about the data being analyzed). In the parlance of the field, they used the metadata to define an encoding space that can be factorized. In this case, the factorized variational autoencoder takes images of the faces of people watching movies and breaks them down into a series of numbers representing specific features: one number for how much a face is smiling, another for how wide open the eyes are, etc. Metadata then allow the algorithm to connect those numbers with other relevant bits of data--for example, with other images of the same face taken at different points in time, or of other faces at the same point in time.
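The factorization idea can be illustrated with a toy rank-1 model in which each observed latent number is the product of a per-viewer factor and a per-timestep factor. This is a hedged sketch of the concept only, not Disney's FVAE code; `factorize` is a hypothetical helper and the alternating-update fit stands in for the autoencoder's learned factorization.

```python
def factorize(matrix, iters=25):
    """Rank-1 factorization matrix[i][t] ~= viewer[i] * time[t], fit by
    alternating least-squares updates. Analogous in spirit to how an FVAE
    ties together codes from the same face across time and from different
    faces at the same point in time."""
    n, m = len(matrix), len(matrix[0])
    viewer = [1.0] * n  # per-viewer factor
    time = [1.0] * m    # per-timestep factor
    for _ in range(iters):
        for i in range(n):
            num = sum(matrix[i][t] * time[t] for t in range(m))
            den = sum(x * x for x in time) or 1.0
            viewer[i] = num / den
        for t in range(m):
            num = sum(matrix[i][t] * viewer[i] for i in range(n))
            den = sum(x * x for x in viewer) or 1.0
            time[t] = num / den
    return viewer, time
```

On exactly rank-1 data the reconstruction `viewer[i] * time[t]` recovers the input; a real FVAE learns such factors jointly with the encoder over high-dimensional face images.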
When kids talk to robots: Enhancing engagement and learning
Conversational robots and virtual characters can enhance learning and expand entertainment options for children, a trio of studies by Disney Research shows, though exactly how these autonomous agents interact with children sometimes depends on a child's age. Pre-school children responding to an on-screen character, for instance, may be happiest if the character simply waits for their responses or repeats a question. Older children talking with a robot, on the other hand, appreciate it when the robot references their previous conversations, while younger children are just as happy if the robot treats each conversation as a new encounter. "Teasing out these nuances is necessary if we are to make the interactions between automated characters and children as engaging as possible," said Jill Fain Lehman, senior research scientist. Lehman and other staff members of Disney Research will present findings from the three studies at the Interaction Design and Children Conference in Palo Alto, Calif., June 27-30. "Though parent-child interaction remains the most important factor in child development, the prospect of automated characters that can interact with children offers exciting opportunities for further enhancing learning and play," said Markus Gross, vice president at Disney Research.
Disney's spray-painting drone could end the need for scaffolding
We've seen some pretty interesting work come out of Disney Research in the past, like techniques for digitally recreating teeth, makeup-projecting lamps, a group AR experience and a stick-like robot that can perform backflips. One of its latest projects is PaintCopter -- a drone that can autonomously spray paint both flat and 3D surfaces. Disney Research says the goal is to be able to paint large surfaces without the need for scaffolding and ladders. The process consists of three steps. First, the target surface is scanned and an accurate 3D map is generated.
Video Friday: Kuri Drop Test, Tensegrity Robots, and More RoboCup
Video Friday is your weekly selection of awesome robotics videos, collected by your Automaton bloggers. We'll also be posting a weekly calendar of upcoming robotics events for the next few months; here's what we have so far (send us your events!): Let us know if you have suggestions for next week, and enjoy today's videos. The emerging field of soft robotics seeks to harness the properties of soft materials in order to create resilient machines. The nature of these materials, however, presents considerable challenges to aspects of design, construction, and control -- and up until now, the vast majority of gaits for soft robots have been hand-designed through empirical trial-and-error.
Stickman Explores the Physics of Flying Through the Air
This is a guest post. The views expressed here are solely those of the author and do not represent positions of IEEE Spectrum or the IEEE. Olympic gymnast Simone Biles has a signature move that is named after her because she is the only woman on earth capable of performing it. The move starts as a layout double flip, but more than halfway through suddenly develops a twist that rotates her body through an extra 180 degrees to land face first. The only visible source of this sudden change in rotation is a small motion of one hand as her arm goes from straight to bent.
Machine Learning: Caltech Algorithm Watches Soccer, Learns the Game - Industry Tap
Engineers at the California Institute of Technology, along with colleagues from Disney Research and STATS (a major supplier of sports data), have developed a machine learning algorithm that watches soccer matches and learns the game in much the same way that fans of the sport do. The machine learning algorithm analyzes tracking data to learn how players coordinate with one another to position themselves on the field. But it does not simply follow the movements of players; it is able to determine the meaning of those movements -- something not included in the data the machine sees. The researchers used graphical models to help the machine understand the roles of the players on the field -- for example, players in different roles tend to occupy different spaces on the playing field. This allows the algorithm to learn and understand what the players are doing.
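As a hedged sketch of the role idea (not the authors' graphical model), one can imagine matching players to positional role templates, so the algorithm reasons about roles such as "left back" rather than raw player identities. The names below are illustrative, and the greedy matching stands in for the permutation inference a graphical model would perform.

```python
def assign_roles(player_positions, role_templates):
    """Greedy one-to-one matching of players to the nearest unclaimed
    role template, based on squared distance between a player's average
    position and the template position."""
    roles = {}
    taken = set()
    # Consider the closest player/role pairs first, so the most
    # obvious matches claim their roles early.
    pairs = sorted(
        ((px - rx) ** 2 + (py - ry) ** 2, player, role)
        for player, (px, py) in player_positions.items()
        for role, (rx, ry) in role_templates.items()
    )
    for _, player, role in pairs:
        if player not in roles and role not in taken:
            roles[player] = role
            taken.add(role)
    return roles
```

Once each player is tagged with a role, coordination patterns can be compared across matches even though different individuals occupy the roles from game to game.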